Results 1 - 20 of 20
1.
2023 9th International Conference on Advanced Computing and Communication Systems, ICACCS 2023 ; : 2067-2071, 2023.
Article in English | Scopus | ID: covidwho-20243456

ABSTRACT

In today's computer systems, the mouse is an essential input device. Touch interfaces are high-contact surfaces that we use regularly throughout the day, so the input device becomes contaminated with bacteria and pathogens. Although wireless mice have eliminated tangled wires, the device still has to be touched. In light of the epidemic, the proposed method employs an external webcam or a built-in image sensor to capture arm gestures and detect fingertips, allowing users to execute standard mouse actions such as left click, scrolling and other mouse activities. The algorithm is trained using machine learning on the image-sensor data, and the fingers are identified efficiently. As a result, the reliance on physical devices to control the computer is removed, eliminating the contact-based man-machine interface. The suggested approach thus helps prevent the spread of Covid-19. © 2023 IEEE.
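A minimal sketch of the kind of pipeline this abstract describes, assuming MediaPipe Hands for fingertip detection and PyAutoGUI for cursor control; the paper does not name its libraries, and the landmark indices used for the pinch-click and its threshold are illustrative:

# Webcam-driven virtual mouse sketch (assumed libraries: OpenCV, MediaPipe, PyAutoGUI).
import cv2
import mediapipe as mp
import pyautogui

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the image so motion feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        index_tip, thumb_tip = lm[8], lm[4]  # MediaPipe indices for index/thumb tips
        # map normalized fingertip coordinates to screen coordinates
        pyautogui.moveTo(index_tip.x * screen_w, index_tip.y * screen_h)
        # a pinch (thumb close to index tip) is treated as a left click
        if abs(index_tip.x - thumb_tip.x) + abs(index_tip.y - thumb_tip.y) < 0.04:
            pyautogui.click()
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()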

2.
CEUR Workshop Proceedings ; 3400:93-106, 2022.
Article in English | Scopus | ID: covidwho-20240174

ABSTRACT

In the field of explainable artificial intelligence (XAI), causal models and argumentation frameworks constitute two formal approaches that provide definitions of the notion of explanation. These symbolic approaches rely on logical formalisms to reason by abduction or to search for causalities, starting from a formal model of a problem or situation. They are designed to satisfy properties that have been established as necessary based on the study of human-human explanations. As a consequence, they appear to be particularly interesting for human-machine interaction as well. In this paper, we show the equivalence between a particular type of causal model, which we call argumentative causal graphs (ACG), and argumentation frameworks. We also propose a transformation between these two systems and examine how one definition of an explanation in argumentation theory is transposed when moving to ACGs. To illustrate our proposition, we use a very simplified version of a screening agent for COVID-19. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)
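As background for readers unfamiliar with argumentation frameworks, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework by iterating the characteristic function; the toy COVID-screening arguments are invented, and this is not the paper's ACG-to-AF transformation, only the underlying AF machinery:

# Grounded extension of an abstract argumentation framework (least fixed point of
# the characteristic function). Toy arguments below are hypothetical.
def grounded_extension(arguments, attacks):
    """arguments: iterable of names; attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # an argument is defended if every attacker is itself attacked by the extension
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

# Toy screening example: "symptomatic" attacks "no_risk", "vaccinated" attacks "symptomatic".
args = {"no_risk", "symptomatic", "vaccinated"}
atts = {("symptomatic", "no_risk"), ("vaccinated", "symptomatic")}
print(grounded_extension(args, atts))  # {'vaccinated', 'no_risk'} (set order may vary)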

3.
1st International Conference on Machine Intelligence and Computer Science Applications, ICMICSA 2022 ; 656 LNNS:119-128, 2023.
Article in English | Scopus | ID: covidwho-2294712

ABSTRACT

Hand gestures are part of the communication tools that allow people to express their ideas and feelings. These gestures can be used not only for communication between people but also to replace traditional devices in human-computer interaction (HCI). This leads us to apply the technology in the e-learning domain. The COVID-19 pandemic has attested to the importance of e-learning. However, practical activities (PA), an important part of the learning process, are absent from the majority of e-learning platforms. Therefore, this paper proposes a convolutional neural network (CNN) method to detect hand gestures so that the user can control and manipulate virtual objects in the PA environment using a simple camera. To achieve this goal, two datasets were merged. In addition, a skin model and background subtraction were applied to obtain well-prepared training and testing datasets for the CNN. Experimental evaluation shows an accuracy of 97.2%. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
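A hedged sketch of the preprocessing-plus-CNN pipeline the abstract outlines, using OpenCV background subtraction, a crude HSV skin mask, and a small Keras network; the HSV range, input size, layer sizes, and class count are assumptions, not the paper's values:

# Skin masking + background subtraction feeding a small gesture-classification CNN.
import cv2
import numpy as np
import tensorflow as tf

bg_subtractor = cv2.createBackgroundSubtractorMOG2()

def preprocess(frame):
    """Mask out non-moving, non-skin pixels and resize for the CNN."""
    fg = bg_subtractor.apply(frame)                       # background subtraction
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))  # crude skin model (assumed range)
    mask = cv2.bitwise_and(fg, skin)
    mask = cv2.resize(mask, (64, 64)).astype("float32") / 255.0
    return mask[..., np.newaxis]                          # shape (64, 64, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),      # class count is a placeholder
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) would then be run on masked frames paired with gesture labels.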

4.
19th IEEE International Multi-Conference on Systems, Signals and Devices, SSD 2022 ; : 1956-1961, 2022.
Article in English | Scopus | ID: covidwho-2192063

ABSTRACT

Non-contact heart rate measurement has attracted a high level of scientific interest; the field offers important advantages in everyday life, such as human-machine interaction and medical applications, especially with the world suffering from the COVID-19 pandemic. In recent years, several techniques for extracting imaging photoplethysmography (iPPG) signals from facial videos have been proposed and developed. On this basis, we performed a study and evaluation of four of the most well-known heart rate estimation methods, Green, ICA, POS, and CHROM, on two publicly accessible datasets, MAHNOB-HCI and UBFC-Phys, using two different facial regions, to enable researchers to develop and use them in real applications. The results show that the video imaging conditions and the correct face-region detection step play an important role in the accuracy of heart rate estimation. © 2022 IEEE.
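For reference, the "Green" baseline evaluated in such studies reduces to averaging the green channel over a facial region per frame, band-pass filtering the trace, and taking the dominant spectral peak; a minimal sketch, where the ROI handling and band limits are assumptions:

# Green-channel rPPG heart-rate estimate from a sequence of video frames.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr_green(frames, fps, roi):
    """frames: list of BGR images; roi: (x, y, w, h) face region in pixels."""
    x, y, w, h = roi
    # mean green-channel intensity of the ROI for each frame
    trace = np.array([f[y:y + h, x:x + w, 1].mean() for f in frames])
    trace = trace - trace.mean()
    # band-pass 0.7-4 Hz (42-240 bpm), the usual heart-rate band
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, trace)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0  # beats per minute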

5.
2022 IEEE International Conference on Internet of Things and Intelligence Systems, IoTaIS 2022 ; : 6-12, 2022.
Article in English | Scopus | ID: covidwho-2191962

ABSTRACT

Around the world, the number of senior citizens is increasing and will continue to increase; it is expected to reach around 20 percent of the population by 2050. Recognizing its importance, the United Nations has identified health and well-being as one of the Sustainable Development Goals (SDGs). The unfortunate pandemic situation caused by the COVID-19 outbreak opened up new challenges for contact-less interaction with and control of devices to ensure the well-being of citizens. In this paper, our main aim is to develop an intelligent framework based on a gesture-based interface that helps senior citizens and physically challenged people interact with and control different devices using only gestures. We focus on dynamic gesture recognition using a deep learning-based Convolutional Neural Network (CNN) model. The proposed system records continuous real-time data streams from non-invasive wearable sensors. This continuous data stream is fragmented into segments that are most likely to contain meaningful gesture data frames using an adaptive threshold setting algorithm. The segmented data frames are provided as input to the CNN model for training, testing, and validation, and are then classified into predefined clusters, which are the gestures. We used an MPU6050 Inertial Measurement Unit sensor to collect hand/finger movement data. The popular and widely used ESP8266 controller is used for data gathering, processing, and communication. We created a dataset of 36 gestures, comprising the ten digits and the 26 English letters. For each gesture, 300 samples were collected from 5 subjects aged 21-30, so the final dataset consists of 10,800 samples across 36 gestures. Six features, comprising linear accelerations and angular rotations along the three axes, are used for training and validation. The proposed model segments 93.75% of data segments correctly using the adaptive threshold selection algorithm, and the CNN classifies 98.67% of gestures correctly. © 2022 IEEE.
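The abstract does not give the adaptive threshold algorithm itself; the sketch below shows one plausible energy-based variant for a 6-axis IMU stream, where the window length and the mean-plus-k-sigma rule are assumptions for illustration only:

# Hypothetical energy-threshold segmentation of a 6-axis IMU stream (3 accel + 3 gyro).
import numpy as np

def segment_gestures(samples, window=20, k=2.5):
    """samples: (N, 6) array of accelerometer + gyroscope readings."""
    energy = np.linalg.norm(samples, axis=1)
    segments, start = [], None
    for i in range(window, len(energy)):
        baseline = energy[i - window:i]
        threshold = baseline.mean() + k * baseline.std()  # adapts to the recent noise level
        if energy[i] > threshold and start is None:
            start = i                                     # gesture likely begins
        elif energy[i] <= threshold and start is not None:
            segments.append((start, i))                   # gesture likely ends
            start = None
    return segments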

6.
Sensors (Basel) ; 23(2)2023 Jan 12.
Article in English | MEDLINE | ID: covidwho-2200669

ABSTRACT

The COVID-19 pandemic created the need for telerehabilitation development, while Industry 4.0 brought the key technology. As motor therapy often requires physical support of a patient's motion, combining robot-aided workouts with remote control is a promising solution. This may be realised with the use of the device's digital twin, enabling immersive operation. This paper presents an extensive overview of this technology's applications within the fields of industry and health. It is followed by an in-depth analysis of needs in rehabilitation based on questionnaire research and a bibliography review. As a result of these sections, an original concept of controlling a rehabilitation exoskeleton via its digital twin in virtual reality is presented. The idea is assessed in terms of benefits and significant challenges regarding its application in real life. The presented aspects show that it may potentially be used for manual remote kinesiotherapy, combined with safety systems that predict potentially harmful situations. The concept is universally applicable to rehabilitation robots.


Subject(s)
COVID-19 , Exoskeleton Device , Robotics , Telerehabilitation , Humans , Pandemics
7.
IEEE Access ; 10:116402-116424, 2022.
Article in English | Web of Science | ID: covidwho-2123156

ABSTRACT

There has been a gigantic stir in the world's healthcare sector for the past couple of years with the advent of the Covid-19 pandemic. The healthcare system has suffered a major setback and, with the lack of doctors, nurses, and healthcare facilities, the need for an intelligent healthcare system has come to the fore more than ever before. Smart healthcare technologies and AI/ML algorithms provide encouraging and favorable solutions to the healthcare sector's challenges. An intelligent human-machine interactive system is the need of the hour. This paper proposes a novel architecture for an intelligent and interactive healthcare system that incorporates edge/fog/cloud computing techniques and focuses on speech recognition and its extensive application in an interactive system. The focal reason for using speech in the healthcare sector is that it is easily available and can readily indicate physical or psychological discomfort; simply put, human speech is the most natural form of communication. A Hidden Markov Model is applied in the proposed approach, since a probabilistic approach is more realistic for prediction purposes. Ongoing projects and directions for future work, along with challenges and issues, are also addressed.
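A minimal sketch of the probabilistic idea behind HMM-based speech recognition, assuming librosa for MFCC extraction and hmmlearn for the models (the paper does not specify its toolchain; the number of states and features are illustrative): one GaussianHMM is trained per keyword, and an utterance is assigned to the model with the highest log-likelihood.

# Keyword classification with one Gaussian HMM per word (assumed tooling: librosa, hmmlearn).
import librosa
import numpy as np
from hmmlearn import hmm

def mfcc_features(path):
    audio, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T  # (frames, 13)

def train_word_models(training_files):
    """training_files: dict mapping word -> list of audio file paths."""
    models = {}
    for word, paths in training_files.items():
        feats = [mfcc_features(p) for p in paths]
        model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=100)
        model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
        models[word] = model
    return models

def classify(models, path):
    feats = mfcc_features(path)
    return max(models, key=lambda w: models[w].score(feats))  # highest log-likelihood wins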

8.
International Workshop on Artificial Intelligence for IT Operations, AIOps 2021, 3rd Workshop on Smart Data Integration and Processing, STRAPS 2021, International Workshop on AI-enabled Process Automation, AI-PA 2021 and Scientific Satellite Events held in conjunction with 19th International Conference on Service-Oriented Computing, ICSOC 2021 ; 13236 LNCS:363-376, 2022.
Article in English | Scopus | ID: covidwho-2013975

ABSTRACT

A service (social) robot can be defined as an Internet of Things (IoT) system consisting of a physical robot body connected to one or more cloud services, facilitating human-machine interaction and enhancing the functionality of a traditional robot. Many studies have found that anthropomorphic designs in robots result in greater user engagement. Humanoid service robots usually behave like natural social interaction partners for human users, with emotional features such as speech, gestures, and eye-gaze that refer to the users' cultural and social background. During the COVID-19 pandemic, service robots have played a much more critical role in helping to safeguard people in many countries. This paper gives an overview of the research issues from technical and socio-technical perspectives, especially in Human-Robot Interaction (HRI), emotional expression, and cybersecurity, with a case study of gamification and service robots. © 2022, Springer Nature Switzerland AG.

9.
15th APCA International Conference on Automatic Control and Soft Computing, CONTROLO 2022 ; 930 LNEE:341-349, 2022.
Article in English | Scopus | ID: covidwho-1971538

ABSTRACT

We develop a human-machine interface, in the form of a dashboard, for COVID-19 data visualization for the regions of Russia and the world. In particular, it includes an adaptive compartmental multi-parametric model of epidemic spread, which is a generalization of the classical SEIR models, and a module for visualizing and setting the parameters of this model according to epidemiological data, implemented in the dashboard. Data for testing have been collected daily since March 2020 from open Internet sources and placed on a “data farm” (an automated system for collecting, storing and pre-processing data from heterogeneous sources) hosted on a remote server. The combination of the proposed approach and its implementation as a dashboard, with the ability to conduct visual numerical experiments and compare them with real data, allows the model parameters to be tuned most accurately, turning it into an intelligent decision-support system. That is a small step towards Industry 5.0. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
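The classical SEIR system that the paper's adaptive compartmental model generalizes can be written down in a few lines; the sketch below integrates it with SciPy, with parameter values chosen purely for illustration:

# Classical SEIR reference model; beta, sigma, gamma and the initial state are illustrative.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N               # susceptibles becoming exposed
    dE = beta * S * I / N - sigma * E    # exposed progressing to infectious
    dI = sigma * E - gamma * I           # infectious recovering
    dR = gamma * I
    return dS, dE, dI, dR

N = 1_000_000
y0 = (N - 10, 10, 0, 0)                  # 10 initially exposed
t = np.linspace(0, 180, 181)             # days
solution = odeint(seir, y0, t, args=(0.35, 1 / 5.2, 1 / 10, N))
S, E, I, R = solution.T
print(f"Peak infectious: {I.max():.0f} on day {int(t[I.argmax()])}")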

10.
Adv Mater ; 34(35): e2204355, 2022 Sep.
Article in English | MEDLINE | ID: covidwho-1929751

ABSTRACT

Noncontact interactive technology provides an intelligent solution to mitigate public health risks from cross-infection in the era of COVID-19. The utilization of human radiation as a stimulus source is conducive to the implementation of low-power, robust noncontact human-machine interaction. However, the low radiation intensity emitted by humans places high demands on photodetection performance. Here, a SrTiO3-x/CuNi-heterostructure-based thermopile is constructed, which combines high thermoelectric performance with near-unity long-wave infrared absorption to realize self-powered detection of human radiation. The response of this thermopile to human radiation is orders of magnitude higher than those of low-dimensional-materials-based photothermoelectric detectors and even commercial thermopiles. Furthermore, a touchless input device based on the thermopile array is developed, which can recognize hand gestures, numbers, and letters in real time. This work offers a reliable strategy for integrating spontaneous human radiation into noncontact human-machine interaction systems.


Subject(s)
COVID-19 , Gestures , Humans , Light
11.
Production and Manufacturing Research-an Open Access Journal ; 10(1):410-427, 2022.
Article in English | Web of Science | ID: covidwho-1915482

ABSTRACT

Coronavirus disease 2019 (COVID-19) has spread globally since 2019. Consequently, businesses from different sectors were forced to work remotely. At the same time, research has seen a rise in the study of emerging technologies that allow and promote such a remote working style; however, not every sector is equipped for such a transition, and the manufacturing sector in particular has faced challenges in this respect. This paper investigates the mental workload (MWL) of two groups of participants through a human-machine interaction task. Participants were required to bring a robotised cell to full production by tuning system and dispensing-process parameters. Following the experiment, a self-assessment of the participants' perceived MWL using the raw NASA Task Load Index (RTLX) was collected. The results reveal that remote participants tend to report a lower overall perceived workload than local participants, although their mental demand was rated higher and their performance lower.
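The raw NASA-TLX (RTLX) score used here is conventionally the unweighted mean of the six subscale ratings; a tiny sketch, with ratings invented purely for illustration:

# Raw NASA-TLX (RTLX): unweighted mean of the six subscale ratings (0-100 scale).
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def rtlx(ratings: dict) -> float:
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Hypothetical remote participant's ratings, not data from the study.
remote_participant = {"mental": 70, "physical": 20, "temporal": 45,
                      "performance": 60, "effort": 50, "frustration": 30}
print(f"RTLX = {rtlx(remote_participant):.1f}")  # -> 45.8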

12.
6th International Conference on Trends in Electronics and Informatics, ICOEI 2022 ; : 222-226, 2022.
Article in English | Scopus | ID: covidwho-1901450

ABSTRACT

In the modern era, computers are becoming faster, smarter and better, and their usage is rising in fields such as medicine, business administration and education. There is therefore a need to simplify the operability and usability of a computer. Digital virtual assistants are known for easing interaction with computers. Since most digital virtual assistants use voice as the mode of communication, deaf and mute persons find it difficult to use virtual assistants on their devices. This research work proposes a voice- and gesture-based virtual assistant that can be used by disabled as well as non-disabled persons to perform common tasks on their computers. The main aim of this research paper is to develop natural human-machine interaction. The input mode of the virtual assistant is the user's choice: users can communicate using voice and gestures, or operate a mouse pointer using gestures. In the proposed system, gestures are recognized even in low-light conditions, so the system is helpful at all times of day. It can be of great help in times like the Covid-19 pandemic for staying contact-free, as well as for people with disabilities. © 2022 IEEE.

13.
Advanced Robotics ; : 1-17, 2022.
Article in English | Academic Search Complete | ID: covidwho-1873670

ABSTRACT

Regular physical activity reduces the risk of obesity and high blood pressure, and slows down age-related loss of mobility and cognitive capabilities. However, 31% of the world population does not perform even the minimum recommended levels of physical activity for a healthy life. On top of that, due to COVID-19 prevention measures involving isolation, lockdowns, and working-from-home policies, adults have drastically reduced their physical activity, by 30%, which further aggravates existing health conditions. In order to encourage exercising at home while still receiving proper instruction, this paper proposes a human-machine interface capable of supporting the motor learning of physical activities by providing training with constant practice of exercises and multimodal feedback. It consists of an interactive mixed-reality environment that does not require a human instructor or specialized facilities. As an application of the system, dance coaching was implemented. The information conveyed to the users consists of foot velocity and position trajectories, as well as the tempo of the desired motion. This is done by providing directional haptic feedback with wearable vibroactuators on the user's ankles, visual feedback with a floor projection, and aural feedback with a metronome. In order to validate the proposed methodology, an experiment in which ballroom dance was taught to 10 novice subjects was performed. Results show that when using the developed multimodal system, position and velocity trajectory errors are reduced by 60% and 37%, respectively, which demonstrates that users can understand and follow the multimodal feedback. After finishing the training and removing the system, users are still able to keep the position and velocity errors 61% and 42% lower than their initial performance, respectively, suggesting that subjects retain the motor skills obtained during training. [Abstract from author] Copyright of Advanced Robotics is the property of Taylor & Francis Ltd.

14.
5th IEEE International Conference on Robotic Computing, IRC 2021 ; : 87-91, 2021.
Article in English | Scopus | ID: covidwho-1779142

ABSTRACT

We introduce the multimodal interactive mobile robot ISOLDE, intended for use in hospitals, with the primary aim of helping healthcare staff maintain social distancing during pandemics such as the ongoing Covid-19 pandemic. ISOLDE also addresses the growing concern about the use of black-box models in artificial intelligence, especially in situations involving high-stakes decisions. Thus, ISOLDE's interactive capabilities have been implemented using a fully interpretable dialogue manager, making it easy to monitor and, if needed, correct the robot's actions, even for a non-expert. A use case is presented (in a laboratory setting) in which the robot successfully interacts with healthcare staff to carry out a requested transportation and delivery task, and also to measure a patient's temperature. © 2021 IEEE.

15.
8th International Conference on Signal Processing and Integrated Networks, SPIN 2021 ; : 909-915, 2021.
Article in English | Scopus | ID: covidwho-1752444

ABSTRACT

With the onset of Covid-19, interactions between humans and machines have increased at a rapid rate. Helping the machine identify the emotion and sentiment of the user plays a key role in making these interactions feel more natural. To do so, existing models for Speech Emotion Recognition (SER) and Sentiment Analysis (SA) focus on detecting either emotion or sentiment alone, on acted databases. Unlike these existing works, this work presents a simple model with a comparatively small speech feature vector to detect both emotion and sentiment from a spontaneous database, the Multimodal EmotionLines Dataset (MELD), which contains voice samples similar to those in a real-time environment. Speech features such as Mel Frequency Cepstral Coefficients (MFCC), entropy, and the Teager Energy Operator are extracted from the voice samples and classified using LogitBoost, Logistic, and Multiclass classifiers. The performance of the model is improved by using feature selection techniques such as backward elimination and Gaussian distribution coefficients. The proposed model is simple, and its results are comparable to existing work on the MELD database. © 2021 IEEE
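Two of the named speech features are straightforward to compute; the sketch below uses librosa for MFCCs and implements the discrete Teager Energy Operator directly, on a bundled placeholder clip rather than MELD, and the way the features are pooled into one vector is an assumption for illustration:

# MFCC and Teager Energy features for an utterance (assumed tooling: librosa, numpy).
import librosa
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

audio, sr = librosa.load(librosa.ex("trumpet"), sr=16000)  # placeholder clip, not MELD
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)     # shape (13, frames)
teo = teager_energy(audio)
# pool frame-level features into one compact per-utterance vector
features = np.concatenate([mfcc.mean(axis=1), [teo.mean(), teo.std()]])
print(features.shape)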

16.
Sustainability ; 14(5):3064, 2022.
Article in English | ProQuest Central | ID: covidwho-1742678

ABSTRACT

Virtual reality (VR) is among the main technologies revolutionizing numerous sectors, including tourism. In the latter context, virtual tours (VTs) are finding increasing application. Providing an immersive and realistic human–machine interaction, VR tours can bring visitors to virtually experience destination areas. The proposed research presents a theoretical and empirical investigation of the role played by some technical VR features (i.e., presence, immersion, ease-of-use) on VR visitors’ enjoyment, satisfaction, and, accordingly, on the physical visit intention of the production site and neighboring areas. After having experienced a 360-degree VR tour of a food production site, created specifically for this study, 140 visitors were surveyed online. Results—emerging from a PLS structural equation model—show that immersion and presence both directly impact the enjoyment and indirectly the user’s VR tour satisfaction and visit intention. Further, if the VR tour is perceived as easy to use, it influences visitors’ satisfaction and physical visit intention. This study contributes to the novel VR literature, applied in the tourism sector, evidencing how immersive and enjoyable scenarios, experienced via widespread devices such as smartphones, may impact tourists’ choices. In food tourism, VR technologies can be fundamental in attracting new visitors to the production sites and neighboring areas.

17.
J Med Internet Res ; 23(11): e25745, 2021 11 04.
Article in English | MEDLINE | ID: covidwho-1547110

ABSTRACT

BACKGROUND: In the last decade, there has been a rapid increase in research on the use of artificial intelligence (AI) to improve child and youth participation in daily life activities, which is a key rehabilitation outcome. However, existing reviews place variable focus on participation, are narrow in scope, and are restricted to select diagnoses, hindering interpretability regarding the existing scope of AI applications that target the participation of children and youth in a pediatric rehabilitation setting. OBJECTIVE: The aim of this scoping review is to examine how AI is integrated into pediatric rehabilitation interventions targeting the participation of children and youth with disabilities or other diagnosed health conditions in valued activities. METHODS: We conducted a comprehensive literature search using established Applied Health Sciences and Computer Science databases. Two independent researchers screened and selected the studies based on a systematic procedure. Inclusion criteria were as follows: participation was an explicit study aim or outcome or the targeted focus of the AI application; AI was applied as part of the provided and tested intervention; children or youth with a disability or other diagnosed health conditions were the focus of either the study or AI application or both; and the study was published in English. Data were mapped according to the types of AI, the mode of delivery, the type of personalization, and whether the intervention addressed individual goal-setting. RESULTS: The literature search identified 3029 documents, of which 94 met the inclusion criteria. Most of the included studies used multiple applications of AI with the highest prevalence of robotics (72/94, 77%) and human-machine interaction (51/94, 54%). Regarding mode of delivery, most of the included studies described an intervention delivered in-person (84/94, 89%), and only 11% (10/94) were delivered remotely. Most interventions were tailored to groups of individuals (93/94, 99%). Only 1% (1/94) of interventions was tailored to patients' individually reported participation needs, and only one intervention (1/94, 1%) described individual goal-setting as part of their therapy process or intervention planning. CONCLUSIONS: There is an increasing amount of research on interventions using AI to target the participation of children and youth with disabilities or other diagnosed health conditions, supporting the potential of using AI in pediatric rehabilitation. On the basis of our results, 3 major gaps for further research and development were identified: a lack of remotely delivered participation-focused interventions using AI; a lack of individual goal-setting integrated in interventions; and a lack of interventions tailored to individually reported participation needs of children, youth, or families.


Subject(s)
Artificial Intelligence , Disabled Persons , Adolescent , Child , Delivery of Health Care , Humans
18.
Nano Res ; 15(3): 2616-2625, 2022.
Article in English | MEDLINE | ID: covidwho-1450016

ABSTRACT

If a person comes into contact with pathogens on public facilities, there is a threat of contact (skin/wound) infection. More urgently, there are also reports of contact transmission of the COVID-19 coronavirus, a reminder that contact infection is an easily overlooked exposure route. Herein, we propose an innovative implantation strategy to fabricate a multi-walled carbon nanotube/polyvinyl alcohol (MWCNT/PVA, MCP) interpenetrating interface that achieves a flexible, damage-resistant, non-contact-sensing electronic skin (E-skin). Interestingly, the MCP E-skin has a fascinating non-contact sensing function: it can respond, through a weak spatial field, to a finger approaching within 0-20 mm. This non-contact sensing can be applied to human-machine interactions in public facilities to block pathogen transmission. Scratches from a fruit knife did not damage the MCP E-skin, and it can resist chemical corrosion after hydrophobic treatment. In addition, the MCP E-skin was developed to monitor respiration and cough in real time for exercise detection and disease diagnosis. Notably, the MCP E-skin has great potential for emergency applications in times of infectious disease pandemics. Electronic Supplementary Material: Supplementary material (fabrication of MCP E-skin, laser confocal tomography, parameter optimization, mechanical property characterization, finite element simulation, sensing mechanism, signal processing) is available in the online version of this article at 10.1007/s12274-021-3831-z.

19.
Am J Infect Control ; 50(6): 651-656, 2022 06.
Article in English | MEDLINE | ID: covidwho-1445240

ABSTRACT

BACKGROUND: Recently, innovative technologies for hand hygiene (HH) monitoring have been developed to improve HH adherence in health care. This study explored health care workers' experiences of using an electronic monitoring system to assess HH adherence. METHODS: An electronic monitoring system with digital feedback was installed on a surgical ward and interviews with health care workers using the system (n = 17) were conducted.  The data were analyzed according to grounded theory by Strauss and Corbin. RESULTS: Health care workers' experiences were expressed in terms of having trust in the monitoring system, requesting system functionality and ease of use and becoming aware of one's own performance. This resulted in the core category of learning to interact with new technology, summarized as the main strategy when using an electronic monitoring system in clinical settings. The system with digital feedback improved the awareness of HH and individual feedback was preferable to group feedback. CONCLUSIONS: Being involved in using and managing a technical innovation for assessing HH adherence in health care is a process of formulating a strategy for learning to interact with new technology. The importance of inviting health care workers to participate in the co-design of technical innovations is crucial, as it creates both trust in the innovation per se and trust in the process of learning how to use it.


Subject(s)
Cross Infection , Hand Hygiene , Cross Infection/prevention & control , Grounded Theory , Guideline Adherence , Hand Hygiene/methods , Health Personnel , Humans , Infection Control/methods
20.
Adv Mater ; 33(16): e2100218, 2021 Apr.
Article in English | MEDLINE | ID: covidwho-1121010

ABSTRACT

From typical electrical appliances to thriving intelligent robots, the exchange of information between humans and machines has mainly relied on the contact sensor medium. However, this kind of contact interaction can cause severe problems, such as inevitable mechanical wear and cross-infection of bacteria or viruses between the users, especially during the COVID-19 pandemic. Therefore, revolutionary noncontact human-machine interaction (HMI) is highly desired in remote online detection and noncontact control systems. In this study, a flexible high-sensitivity humidity sensor and array are presented, fabricated by anchoring multilayer graphene (MG) into electrospun polyamide (PA) 66. The sensor works in noncontact mode for asthma detection, via monitoring the respiration rate in real time, and remote alarm systems and provides touchless interfaces in medicine delivery for bedridden patients. The physical structure of the large specific surface area and the chemical structure of the abundant water-absorbing functional groups of the PA66 nanofiber networks contribute to the high performance synergistically. This work can lead to a new era of noncontact HMI without the risk of contagiousness and provide a general and effective strategy for the development of smart electronics that require noncontact interaction.


Subject(s)
Biosensing Techniques/methods , Electronics , Asthma/diagnosis , Biocompatible Materials/chemistry , Biosensing Techniques/instrumentation , Electrodes , Graphite/chemistry , Humans , Humidity , Internet of Things , Mobile Applications , Nanofibers/chemistry , Respiratory Rate , Wearable Electronic Devices